18 research outputs found

    Analysis of Recurrent Neural Networks for Henon Simulated Time-Series Forecasting

    Forecasting of chaotic time-series has increasingly become a challenging subject. Non-linear models such as recurrent neural networks have been successfully applied to generating short-term forecasts, but they perform poorly in long-term forecasts because the vanishing gradient problem worsens as the forecasting period increases. This study proposes a robust model that can be applied to long-term forecasting of Henon chaotic time-series while reducing the vanishing gradient problem by enhancing the model's ability to learn long-term dependencies. The proposed hybrid model is tested using Henon-simulated chaotic time-series data. Empirical analysis is performed using quantitative forecasting metrics and comparative model performance on the generated forecasts. Performance evaluation results confirm that the proposed recurrent model performs long-term forecasts on Henon chaotic time-series effectively in terms of error metrics compared to existing forecasting models.
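
    The abstract does not detail the hybrid architecture; as a minimal sketch, the snippet below generates a Henon series from the standard map (x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n, with a = 1.4, b = 0.3) and fits a plain Keras LSTM baseline, a gated recurrent model of the kind used to mitigate vanishing gradients. The window width, layer sizes, and training settings are illustrative, not the paper's.

```python
# Sketch: generate a Henon chaotic series and fit an LSTM forecaster.
# The paper's hybrid model is not specified in the abstract; a plain
# LSTM stands in here as a gated baseline.
import numpy as np
import tensorflow as tf

def henon_series(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
    """Iterate the Henon map and return the x-coordinate series."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        xs[i] = x
        x, y = 1.0 - a * x**2 + y, b * x
    return xs

def make_windows(series, width=20):
    """Turn a 1-D series into (window, next-value) training pairs."""
    X = np.stack([series[i:i + width] for i in range(len(series) - width)])
    y = series[width:]
    return X[..., None], y   # add a feature axis for the LSTM

series = henon_series(3000)
X, y = make_windows(series)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=64, verbose=0)
```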

    Towards Wearable Augmented Reality in Healthcare: A Comparative Survey and Analysis of Head-Mounted Displays

    Head-mounted displays (HMDs) have the potential to greatly impact the surgical field by maintaining sterile conditions in healthcare environments. Google Glass (GG) and Microsoft HoloLens (MH) are examples of optical HMDs. In this comparative survey of wearable augmented reality (AR) technology in the medical field, we examine current developments in wearable AR technology, as well as the medical aspects, with a specific emphasis on smart glasses and HoloLens. The authors searched recent articles (between 2017 and 2022) in the PubMed, Web of Science, Scopus, and ScienceDirect databases, and a total of 37 relevant studies were considered for this analysis. The selected studies were divided into two main groups: 15 of the studies (around 41%) focused on smart glasses (e.g., Google Glass) and 22 (59%) focused on Microsoft HoloLens. Google Glass was used in various surgical specialities and in preoperative settings such as dermatology visits and nursing skill training. Microsoft HoloLens, meanwhile, was used in telepresence applications and in holographic navigation for shoulder and gait-impairment rehabilitation, among others. However, some limitations were associated with their use, such as low battery life, limited memory size, and possible ocular pain. Promising results were obtained by different studies regarding the feasibility, usability, and acceptability of using both Google Glass and Microsoft HoloLens in patient-centric settings as well as in medical education and training. Further work and the development of rigorous research designs are required to evaluate the efficacy and cost-effectiveness of wearable AR devices in the future.

    An Unfair Semi-Greedy Real-Time Multiprocessor Scheduling Algorithm

    Optimal real-time multiprocessor scheduling algorithms achieve full processor utilization, equal to the number of processors in the system. However, optimality always comes at the expense of scheduling overheads in terms of task preemptions and migrations, which greatly affect the practicality of the algorithm. This is because most of these algorithms achieve optimality by adhering to the fairness rule, in which tasks are forced to make progress in their executions at each time quantum, or at the end of each time slice in a fluid schedule model, which corresponds to the deadlines of all tasks in the system. These preemptions and migrations add extra overheads that must be added to the worst-case execution requirements of a task.
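
    As a concrete illustration of the fairness rule described above, the sketch below computes a task's fluid-schedule lag, the quantity that fairness-based optimal schedulers keep bounded at every time quantum; bounding it is what forces the frequent preemptions and migrations. The task parameters are illustrative, not drawn from the paper.

```python
# Sketch: fluid-schedule lag of a periodic task at time t.
from dataclasses import dataclass

@dataclass
class Task:
    wcet: float            # worst-case execution time C
    period: float          # period / relative deadline T
    executed: float = 0.0  # processor time received so far

    @property
    def utilization(self) -> float:
        return self.wcet / self.period

def lag(task: Task, t: float) -> float:
    """Fluid progress (u * t) minus actual progress at time t.

    Optimal fair schedulers keep this near zero at every quantum."""
    return task.utilization * t - task.executed

tasks = [Task(wcet=2, period=5), Task(wcet=3, period=10)]
tasks[0].executed = 1.0
print([round(lag(tsk, t=4.0), 2) for tsk in tasks])  # positive = behind fluid schedule
```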

    A Framework for Mitigating DDoS and DOS Attacks in IoT Environment Using Hybrid Approach

    The Internet of Things (IoT) has gained remarkable acceptance from millions of individuals. This is evident in the extensive use of intelligent devices such as smartphones, smart televisions, speakers, air conditioning, lighting, and high-speed networks. The general application areas of IoT include industries, hospitals, schools, homes, sports, oil and gas, automobiles, and entertainment, to mention a few. However, because of the unbounded connection of IoT devices and the lack of a specific method for overseeing communication, security concerns such as distributed denial of service (DDoS), denial of service (DoS), replay, botnet, social engineering, man-in-the-middle, and brute-force attacks have posed enormous challenges in the IoT environment. Given these challenges, this study focuses on DDoS and DoS attacks, the two attacks with the most severe consequences in the IoT environment. The solution proposed in this study can also help future researchers tackle the expansion of IoT security threats. Moreover, the study conducts rigorous experiments to assess the efficiency of the proposed approach. In summary, the experimental results show that the proposed hybrid approach mitigates data exfiltration caused by DDoS and DoS attacks by 95.4%, with average network lifetime, energy consumption, and throughput improvements of 15%, 25%, and 60%, respectively.
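
    The abstract does not describe the internals of the hybrid approach; the sketch below shows only one plausible building block of such a defence, a per-source sliding-window rate check for flood detection. The window length and request budget are assumed values, not the paper's.

```python
# Minimal sketch of threshold-based flood detection, one common
# building block of a DDoS/DoS defence. Thresholds are illustrative.
import time
from collections import defaultdict, deque

WINDOW_S = 1.0   # sliding-window length in seconds (assumed)
MAX_REQS = 100   # per-source request budget per window (assumed)

arrivals = defaultdict(deque)  # source address -> recent arrival times

def allow(src, now=None):
    """Return False once a source exceeds its per-window request budget."""
    now = time.monotonic() if now is None else now
    q = arrivals[src]
    q.append(now)
    while q and now - q[0] > WINDOW_S:  # drop arrivals outside the window
        q.popleft()
    return len(q) <= MAX_REQS
```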

    Asynchronous Non-Blocking Algorithm to Handle Straggler Reduce Tasks in Hadoop System

    Hadoop is widely adopted as a big data processing application, as it can run on commodity hardware in a reasonable time. Hadoop uses asynchronous blocking concurrency via the Thread and Future classes. Therefore, in some cases, such as network-link or hardware failure, a running task may block other tasks from running (the task becomes a straggler). Hadoop releases are equipped with algorithms to handle the straggler-task problem. However, these algorithms manage Map and Reduce tasks in the same way, while the root cause of straggling might differ between the two. In this paper, the Asynchronous Non-Blocking (ANB) method is proposed to improve performance and avoid the blocking of Reduce tasks in Hadoop. Instead of a single queue, our approach uses two queues: a task queue and a callback queue. When a task is not ready or is detected as a straggler, it is removed from the main task queue and temporarily sent to the callback queue. When the task is ready to run, it is sent back to the main task queue for execution. The performance of the algorithm is compared with rTuner, the most recent published work we found on handling stragglers among Reduce tasks. The comparison shows that ANB consistently completes faster, because any unready task is put directly into the callback queue without blocking other tasks. Furthermore, the overhead time of rTuner is high, as it needs to check the straggler status and find the reason a task became a straggler.
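
    A minimal sketch of the two-queue idea described above: unready or straggler tasks are parked in a callback queue so the main task queue is never blocked. The hooks is_ready and execute are hypothetical stand-ins for Hadoop's task-state checks and task execution; this is a simplified single-threaded model of the scheduling loop, not the paper's implementation.

```python
# Sketch of the ANB two-queue scheduling loop. A real implementation
# would react to readiness callbacks rather than re-poll parked tasks.
from collections import deque

def run_anb(tasks, is_ready, execute):
    """tasks: task ids; is_ready/execute: caller-supplied hooks (assumed)."""
    task_q = deque(tasks)   # main task queue
    callback_q = deque()    # parking area for unready/straggler tasks
    while task_q or callback_q:
        if not task_q:                           # only parked tasks remain:
            task_q.append(callback_q.popleft())  # retry the oldest one
        t = task_q.popleft()
        if is_ready(t):
            execute(t)               # runs without waiting on stragglers
        else:
            callback_q.append(t)     # park it; do not block the queue
```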

    Deep-Learning Based Prognosis Approach for Remaining Useful Life Prediction of Turbofan Engine

    The entire life cycle of a turbofan engine is a type of asymmetrical process in which each engine part has different characteristics. Extracting and modeling the engine's symmetry characteristics is significant in improving remaining useful life (RUL) predictions for aircraft components, and it is critical for an effective and reliable maintenance strategy. Such predictions can improve maximum operating availability and reduce maintenance costs. Due to the high nonlinearity and complexity of mechanical systems, conventional methods are unable to satisfy the needs of medium- and long-term prediction problems and frequently overlook the effect of temporal information on prediction performance. To address this issue, this study presents a new attention-based deep convolutional neural network (DCNN) architecture to predict the RUL of turbofan engines. The prognosability metric was used for feature ranking and selection, whereas a time-window method was employed for sample preparation to take advantage of multivariate temporal information for better feature extraction by the attention-based DCNN model. The proposed model was validated on a well-known benchmark dataset, and evaluation measures such as root mean square error (RMSE) and the asymmetric scoring function (score) were used to validate the approach. The experimental results show the superiority of the proposed approach in predicting the RUL of a turbofan engine. The attention-based DCNN model achieved the best scores on the FD001 independent testing dataset, with an RMSE of 11.81 and a score of 223.
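
    A minimal sketch of the time-window sample preparation mentioned above, assuming the common run-to-failure labelling in which each window is labelled with the capped remaining cycle count. The window size, RUL cap, and channel count are illustrative, not the paper's settings.

```python
# Sketch: slice one engine's multivariate sensor record into
# fixed-length windows, each labelled with its capped RUL.
import numpy as np

def window_samples(sensors, window=30, rul_cap=125):
    """sensors: (cycles, features) array for one run-to-failure record."""
    n = len(sensors)
    X, y = [], []
    for end in range(window, n + 1):
        X.append(sensors[end - window:end])   # multivariate window
        y.append(min(n - end, rul_cap))       # remaining cycles, capped
    return np.asarray(X), np.asarray(y, dtype=float)

X, y = window_samples(np.random.rand(200, 14))  # e.g. 14 sensor channels
print(X.shape, y.shape)                         # (171, 30, 14) (171,)
```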

    Prediction of oil and gas pipeline failures through machine learning approaches: A systematic review

    Pipelines are vital for transporting oil and gas, but leaks can have serious consequences such as fires, injuries, pollution, and property damage. Therefore, preserving pipeline integrity is crucial for a safe and sustainable energy supply. The rapid progress of machine learning (ML) technologies provides an advantageous opportunity to develop predictive models that can effectively tackle these challenges. This review article focuses on the use of machine and deep learning techniques, specifically artificial neural networks (ANNs), support vector machines (SVMs), and hybrid machine learning (HML) algorithms, for predicting different pipeline failures in the oil and gas industry. In contrast to existing non-comprehensive reviews of pipeline defects, this article explicitly addresses the application of ML techniques, parameters, and data reliability for this purpose. The article surveys research in this specific area, offering a coherent discussion and identifying the motivations and challenges associated with using ML to predict different types of defects in pipelines. The review also includes a bibliometric analysis of the literature, highlighting common ML techniques, investigated failures, and experimental tests. It also provides in-depth details, summarized in tables, on different failure types, commonly used ML algorithms, and data resources, with critical discussions. Based on the aforementioned comprehensive review, it was found that ML approaches, specifically ANNs and SVMs, can accurately predict oil and gas pipeline failures compared with conventional methods. However, it is highly recommended to combine multiple ML algorithms to further improve accuracy and prediction time. Comparing ML predictive models based on field, experimental, and simulation data for various pipeline failures can establish reliable and cost-effective monitoring systems for the entire pipeline network. This systematic review is expected to aid in understanding the existing research gaps and provide options for other researchers interested in predicting oil and gas pipeline failures.
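
    As an illustration of the kind of ML failure prediction the review surveys, the sketch below trains an SVM classifier on synthetic stand-in features; real studies would use field, experimental, or simulation data with domain-specific features, not random values.

```python
# Illustrative sketch: SVM failure classifier on synthetic stand-in
# features (e.g. wall thickness, pressure, corrosion depth, age).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # 4 stand-in features
y = (X[:, 2] + 0.5 * X[:, 1] > 0.8).astype(int)  # synthetic "failure" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```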

    An Ensemble One Dimensional Convolutional Neural Network with Bayesian Optimization for Environmental Sound Classification

    With the growth of deep learning in various classification problems, many researchers have used deep learning methods in environmental sound classification tasks. This paper introduces an end-to-end method for environmental sound classification based on a one-dimensional convolutional neural network with Bayesian optimization and ensemble learning, which learns feature representations directly from the audio signal. Several convolutional layers were used to capture the signal and learn various filters relevant to the classification problem. The proposed method can handle audio signals of any length, as a sliding window divides the signal into overlapped frames. Bayesian optimization accomplished hyperparameter selection, and models were evaluated with cross-validation. Multiple models with different settings were developed based on Bayesian optimization to ensure network convergence in both convex and non-convex optimization. The UrbanSound8K dataset was used to evaluate the performance of the proposed end-to-end model. The experimental results achieved a classification accuracy of 94.46%, which is 5% higher than existing end-to-end approaches with fewer trainable parameters. Several measurement indices, namely sensitivity, specificity, accuracy, precision, recall, F-measure, area under the ROC curve, and area under the precision-recall curve, were used to measure the model's performance. The proposed approach outperformed state-of-the-art end-to-end approaches that use hand-crafted features as input in the selected measurement indices and in time complexity.
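
    A minimal sketch of the sliding-window framing described above, which lets the model accept audio of any length by cutting the signal into fixed-length overlapped frames. The frame and hop sizes here are illustrative, not the paper's settings.

```python
# Sketch: split a raw 1-D audio signal into 50%-overlapped frames
# so a 1-D CNN can consume clips of arbitrary length.
import numpy as np

def frame_signal(signal, frame_len=32000, hop=16000):
    """Return an array of fixed-length, overlapped frames."""
    if len(signal) < frame_len:                      # pad short clips
        signal = np.pad(signal, (0, frame_len - len(signal)))
    starts = range(0, len(signal) - frame_len + 1, hop)
    return np.stack([signal[s:s + frame_len] for s in starts])

clip = np.random.randn(4 * 22050)                    # ~4 s at 22.05 kHz
frames = frame_signal(clip)
print(frames.shape)                                  # (n_frames, frame_len)
```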
